Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits
We study the problem of zero-order optimization of a strongly convex
function. The goal is to find the minimizer of the function by a sequential
exploration of its values, under measurement noise. We study the impact of
higher order smoothness properties of the function on the optimization error
and on the cumulative regret. To solve this problem we consider a randomized
approximation of the projected gradient descent algorithm. The gradient is
estimated by a randomized procedure involving two function evaluations and a
smoothing kernel. We derive upper bounds for this algorithm both in the
constrained and unconstrained settings and prove minimax lower bounds for any
sequential search method. Our results imply that the zero-order algorithm is
nearly optimal in terms of sample complexity and the problem parameters. Based
on this algorithm, we also propose an estimator of the minimum value of the
function achieving almost sharp oracle behavior. We compare our results with
the state-of-the-art, highlighting a number of key improvements.
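As a concrete illustration of the kind of procedure described above, here is a minimal Python sketch, assuming a noisy quadratic objective and our own naming, kernel normalization, and step/discretization schedules; it is a sketch in the spirit of the paper, not the authors' algorithm verbatim.

```python
import numpy as np

# Sketch (our naming and schedules) of a two-point zero-order gradient
# estimator with spherical randomization and a smoothing kernel, plugged
# into projected gradient descent.
rng = np.random.default_rng(0)

def kernel(r):
    # With our normalization (r uniform on [-1, 1]), K(r) = 3r satisfies
    # E[K(r)] = 0 and E[r K(r)] = 1, making the estimator below unbiased
    # for quadratics; higher-order kernels exploit higher smoothness.
    return 3.0 * r

def grad_estimate(f, x, h):
    """Two noisy function evaluations + randomization over the l2 sphere."""
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)        # uniform direction on the unit sphere
    r = rng.uniform(-1.0, 1.0)          # scalar smoothing variable
    return (d / (2.0 * h)) * (f(x + h * r * zeta) - f(x - h * r * zeta)) * zeta * kernel(r)

def projected_zero_order(f, x0, project, steps=2000):
    x = x0.astype(float)
    for t in range(1, steps + 1):
        h = t ** -0.25                  # illustrative discretization schedule
        eta = 1.0 / t                   # step size suited to strong convexity
        x = project(x - eta * grad_estimate(f, x, h))
    return x

# Toy run: noisy quadratic, constrained to the unit ball.
f = lambda x: np.sum((x - 0.3) ** 2) + 0.01 * rng.standard_normal()
project = lambda x: x / max(1.0, np.linalg.norm(x))
print(projected_zero_order(f, np.zeros(5), project))   # roughly 0.3 per coordinate
```

The kernel weighting is the ingredient that lets higher-order smoothness of the objective reduce the bias of the two-point estimate.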
Estimating the minimizer and the minimum value of a regression function under passive design
We propose a new method for estimating the minimizer and the minimum value of a smooth and strongly convex regression function from observations contaminated by random noise. Our estimator of the minimizer is based on a version of projected gradient descent with the gradient estimated by a regularized local polynomial algorithm. Next, we propose a two-stage procedure for estimating the minimum value of the regression function. At the first stage, we construct an accurate enough estimator of the minimizer, which can be, for example, the estimator proposed above. At the second stage, we estimate the function value at the point obtained in the first stage using a rate-optimal nonparametric procedure. We derive non-asymptotic upper bounds for the quadratic risk and optimization error of the estimator of the minimizer, and for the risk of estimating the minimum value. We establish minimax lower bounds showing that, under a certain choice of parameters, the proposed algorithms achieve the minimax optimal rates of convergence on the class of smooth and strongly convex functions.
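A minimal 1-D sketch of the two-stage construction, with hypothetical stand-ins for both stages (the paper uses a projected-gradient minimizer estimator with regularized local polynomial gradients and a rate-optimal value estimator; we substitute simple kernel regression for both):

```python
import numpy as np

# Two-stage idea under passive design: the data (X_i, Y_i) are given, not
# queried. Stage 1 below is a crude stand-in (grid search over a kernel
# regression fit) for the paper's minimizer estimator; stage 2 estimates
# the function value at the stage-1 point on held-out data.
rng = np.random.default_rng(1)
f = lambda x: (x - 0.3) ** 2 + 1.0          # unknown regression function

n = 4000
X = rng.uniform(-1.0, 1.0, n)               # passive (non-adaptive) design
Y = f(X) + 0.1 * rng.standard_normal(n)
X1, Y1, X2, Y2 = X[: n // 2], Y[: n // 2], X[n // 2 :], Y[n // 2 :]

def nw(x0, X, Y, h):
    """Nadaraya-Watson estimate of the regression function at x0."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

# Stage 1: estimate the minimizer from the first half of the sample.
grid = np.linspace(-1.0, 1.0, 201)
x_hat = grid[np.argmin([nw(g, X1, Y1, h=0.1) for g in grid])]

# Stage 2: estimate the minimum value at x_hat on the second half.
m_hat = nw(x_hat, X2, Y2, h=0.05)
print(x_hat, m_hat)                          # roughly 0.3 and 1.0
```

Sample splitting keeps the stage-2 value estimate independent of the point selected at stage 1, which is what makes the risk of the two stages easy to combine.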
Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm
This work studies minimization problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth and possibly satisfies additional properties. We consider two kinds of zero-order projected gradient descent algorithms, which differ in the form of the gradient estimator. The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016). We present an improved analysis of this algorithm on the class of highly smooth and strongly convex functions studied in the prior work, and we derive rates of convergence for two more general classes of non-convex functions. Namely, we consider highly smooth functions satisfying the Polyak-{\L}ojasiewicz condition and the class of highly smooth functions with no additional property. The second algorithm is based on randomization over the $\ell_1$ sphere, and it extends to the highly smooth setting the algorithm that was recently proposed for Lipschitz convex functions in Akhavan et al. (2022). We show that, in the case of a noiseless oracle, this novel algorithm enjoys better bounds on bias and variance than the $\ell_2$ randomization and the commonly used Gaussian randomization algorithms, while in the noisy case both the $\ell_1$ and $\ell_2$ algorithms benefit from similar improved theoretical guarantees. The improvements are achieved thanks to a new proof technique based on Poincar\'e type inequalities for uniform distributions on the $\ell_1$ or $\ell_2$ spheres. The results are established under weak (almost adversarial) assumptions on the noise. Moreover, we provide minimax lower bounds proving optimality or near optimality of the obtained upper bounds in several cases.
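To make the two randomizations concrete, here is a hedged Python sketch, assuming our own scaling conventions: uniform $\ell_1$-sphere directions via normalized i.i.d. Laplace draws, with a sign direction in the spirit of Akhavan et al. (2022), next to the classical $\ell_2$ spherical estimator. Both are unbiased for a quadratic.

```python
import numpy as np

# The two randomization schemes side by side (our naming and constants).
# Normalizing i.i.d. Laplace draws by their l1 norm yields a uniform
# direction on the l1 sphere; the l1 estimator then moves along sign(zeta).
rng = np.random.default_rng(2)

def grad_l1(f, x, h):
    eps = rng.laplace(size=x.size)
    zeta = eps / np.abs(eps).sum()      # uniform on {z : ||z||_1 = 1}
    return (x.size / (2 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)

def grad_l2(f, x, h):
    zeta = rng.standard_normal(x.size)
    zeta /= np.linalg.norm(zeta)        # uniform on {z : ||z||_2 = 1}
    return (x.size / (2 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * zeta

f = lambda x: np.sum(x ** 2)            # noiseless oracle, grad f(x) = 2x
x = np.ones(10)
est_l1 = np.mean([grad_l1(f, x, 1e-3) for _ in range(5000)], axis=0)
est_l2 = np.mean([grad_l2(f, x, 1e-3) for _ in range(5000)], axis=0)
print(est_l1[:3], est_l2[:3])           # both roughly 2.0 per coordinate
```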
A gradient estimator via L1-randomization for online zero-order optimization with two point feedback
This work studies online zero-order optimization of convex and Lipschitz functions. We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere. Considering different geometries of feasible sets and Lipschitz assumptions, we analyse the online dual averaging algorithm with our estimator in place of the usual gradient. We consider two types of assumptions on the noise of the zero-order oracle: canceling noise and adversarial noise. We provide an anytime and completely data-driven algorithm, which is adaptive to all parameters of the problem. In the case of canceling noise, which was previously studied in the literature, our guarantees are either comparable to or better than the state-of-the-art bounds obtained by Duchi et al. (2015) and Shamir (2017) for non-adaptive algorithms. Our analysis is based on deriving a new weighted Poincar\'e type inequality for the uniform measure on the $\ell_1$-sphere with explicit constants, which may be of independent interest.
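As a rough illustration of how such an estimator slots into the dual averaging scheme, here is a toy Python loop on a Euclidean ball; this is a sketch under our own fixed schedules, not the paper's anytime, fully data-driven algorithm.

```python
import numpy as np

# Online dual averaging on a Euclidean ball with the usual gradient
# replaced by the l1-randomized zero-order estimator. The discretization
# and scaling schedules are our illustrative choices.
rng = np.random.default_rng(3)

def zo_grad(f, x, h):
    eps = rng.laplace(size=x.size)
    zeta = eps / np.abs(eps).sum()       # uniform on the l1 sphere
    return (x.size / (2 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)

def project_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

# Smooth toy objective with random oracle noise, so the answer is easy to check.
f = lambda x: np.sum((x - 0.3) ** 2) + 0.01 * rng.standard_normal()

d, T = 5, 5000
S = np.zeros(d)                          # running sum of gradient estimates
x = np.zeros(d)
for t in range(1, T + 1):
    S += zo_grad(f, x, h=t ** -0.25)     # illustrative discretization schedule
    x = project_ball(-S / np.sqrt(t))    # dual-averaging (lazy projection) iterate
print(x)                                 # roughly 0.3 in each coordinate
```

Unlike plain projected gradient descent, the iterate here depends only on the running sum of gradient estimates, which is what makes the method convenient to analyse in the online setting.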